Domino on Linux/Unix, Troubleshooting, Best Practices, Tips and more ...


Daniel Nashed

Let’s Encrypt X509v3 Subject Key Identifier Changes to the first 160 bits of a SHA256 instead of a SHA1 hash

Daniel Nashed – 7 June 2025 22:14:00

Most admins don't pay much attention to Subject Key Identifiers (SKI).

But they have a special role in Domino 12 and higher, and there are other important use cases in combination with the Authority Key Identifier.
CertStore, for example, uses the SKI as a unique identifier to reference root certificates for trusts.


I noticed that the SKI in the X509v3 extension of a Let's Encrypt certificate is different from what Domino calculates.
It took a moment to find out what is going on.


This change does not have any impact on how those certificates and SKIs are handled in Domino.
Domino calculates the SKI in the same way for all certificates, and the X509v3 extension isn't used for looking up certificates.
It's still important to know the background and the change.

To understand the changes, you need to first understand how SKIs work.

Most Certificate Authorities (CAs) use a SHA1 hash of the raw public key.


Let's Encrypt now uses SHA256, truncated to the first 160 bits, to generate the Subject Key Identifier (SKI).

This is a departure from their previous practice of using SHA1, as outlined in RFC 5280.

This change ensures compatibility and aligns with RFC 7093.


Here is the background:


https://community.letsencrypt.org/t/lets-encrypt-new-intermediate-certificates/209498?page=2

"we have updated the ceremony to use a SHA-256 hash truncated to the first 160 bits (the same length as a SHA-1 hash) for the SubjectKeyIdentifier, so that we can simultaneously stop using SHA-1 and not increase the size of the new certificates."



Explaining Subject Key Identifiers and Authority Key Identifiers


The X509v3 Subject Key Identifier and X509v3 Authority Key Identifier are X.509 certificate extensions defined in version 3 of the X.509 standard.

They are used to help uniquely identify keys and link certificates in a certificate chain, especially in public key infrastructures (PKI).


X509v3 Subject Key Identifier (SKI)


This extension provides a unique identifier for the public key contained in the certificate. It’s primarily used to:
  • Identify the certificate’s key (especially if the same subject name is reused in different certs).
  • Support building and verifying chains of trust.


How it's generated:


Typically, it's the SHA-1 hash of the DER-encoded SubjectPublicKey BIT STRING (just the raw key bytes, not the full X509_PUBKEY).


Example from openssl x509 -text:


X509v3 Subject Key Identifier:
B9:2F:D7:DA:9F:61:B8:F6:53:AD:D4:87:95:F4:AA:58:6B:B2:D0:F5



X509v3 Authority Key Identifier (AKI)


This extension helps identify which key was used to sign this certificate — i.e., it points to the issuer’s key. It's especially useful for:
  • Locating the correct issuing certificate in cases where the issuer’s name is ambiguous or reused.
  • Detecting and handling key rollovers (same issuer, different keys).

How it's constructed:


It can include one or more of the following:
  • Key Identifier – same method as SKI, but for the issuer's key.
  • Issuer name – from the issuer's certificate.
  • Issuer serial number – from the issuer’s certificate.


Example from openssl x509 -text:


X509v3 Authority Key Identifier:
keyid:88:B3:3A:E1:0B:FA:A7:6A:47:C5:BB:BF:0C:F0:71:ED:E4:3D:9D:E7



How They Work Together:


In a certificate chain:
  • The Subject Key Identifier (SKI) in a CA certificate identifies its public key.
  • The Authority Key Identifier (AKI) in a child certificate refers to that SKI, helping to verify which certificate signed it.

This matching is especially critical in:
  • Complex PKI environments with multiple CAs
  • Certificate revocation checking (e.g., CRLs, OCSP)
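
A quick way to see this pairing on a real certificate chain is to print both extensions with openssl. This is just a sketch assuming OpenSSL 1.1.1 or later; leaf.pem and issuer.pem are placeholder file names:

# Print the issuer's SKI and the leaf's AKI -- the keyid values should match
openssl x509 -in issuer.pem -noout -ext subjectKeyIdentifier
openssl x509 -in leaf.pem -noout -ext authorityKeyIdentifier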


I have added SHA256 and SHA1 calculation to my private certificate tool (nshcertool) to be able to check the different formats.
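
For reference, the two hashing methods can also be reproduced on the command line. This is only a rough sketch for RSA certificates (asn1parse can only unwrap the BIT STRING when its payload is itself DER), and cert.pem is a placeholder file name:

# Extract the SubjectPublicKeyInfo in DER form
openssl x509 -in cert.pem -noout -pubkey | openssl pkey -pubin -outform DER -out spki.der

# Locate the BIT STRING inside the SPKI and dump its payload (the raw key bytes)
OFFSET=$(openssl asn1parse -inform DER -in spki.der | awk -F: '/BIT STRING/ {print $1; exit}' | tr -d ' ')
openssl asn1parse -inform DER -in spki.der -strparse "$OFFSET" -noout -out pubkey.raw

# Classic SKI (RFC 5280 example method): SHA-1 over the raw key bytes
openssl dgst -sha1 pubkey.raw

# RFC 7093 method 1 (what Let's Encrypt uses now): leftmost 160 bits of SHA-256
openssl dgst -sha256 -hex pubkey.raw | awk '{print substr($NF, 1, 40)}'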


---


Reference RFC 7093


https://datatracker.ietf.org/doc/html/rfc7093#section-2


Additional Methods for Generating Key Identifiers


[RFC5280] specifies two examples for generating key identifiers from public keys. Four additional mechanisms are as follows:

1) The keyIdentifier is composed of the leftmost 160-bits of the SHA-256 hash of the value of the BIT STRING subjectPublicKey (excluding the tag, length, and number of unused bits).

2) The keyIdentifier is composed of the leftmost 160-bits of the SHA-384 hash of the value of the BIT STRING subjectPublicKey (excluding the tag, length, and number of unused bits).

3) The keyIdentifier is composed of the leftmost 160-bits of the SHA-512 hash of the value of the BIT STRING subjectPublicKey (excluding the tag, length, and number of unused bits).

4) The keyIdentifier is composed of the hash of the DER encoding of the SubjectPublicKeyInfo value.


Domino container image: use a different editor than vi

Daniel Nashed – 5 June 2025 23:01:58

This came up in a discussion at the Engage conference.
The functionality is not new, but maybe not well known.

I am personally a big fan of vi, because I have been using it since the early days of Linux.

But there are other editors like nano or mcedit from Midnight Commander (MC).
Not all distributions come with all editors available.
For example, UBI includes nano but not MC.

The container image supports adding any type of package available on the distribution.


Here is a command-line example of how to install nano at build time:


-linuxpkg=nano


Once an alternate editor is installed, you can set environment variables via dominoctl (dominoctl env), which are passed to the running container.

The editor variable all involved scripts support is:

EDIT_COMMAND=nano


I just added another option which many Linux tools support:


EDITOR=nano


EDIT_COMMAND is checked first. If it is not set, the EDITOR variable is checked.

If nothing is specified, "vi" is used.
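
The lookup order boils down to logic like this. This is only a simplified sketch of the behavior described above, not the actual script code, and the file to edit is a placeholder:

# Simplified sketch of the editor selection order (EDIT_COMMAND, then EDITOR, then vi)
if [ -n "$EDIT_COMMAND" ]; then
  EDITOR_BIN="$EDIT_COMMAND"
elif [ -n "$EDITOR" ]; then
  EDITOR_BIN="$EDITOR"
else
  EDITOR_BIN=vi
fi

"$EDITOR_BIN" "$FILE_TO_EDIT"   # FILE_TO_EDIT is a placeholder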


This option provides an easy way to customize the container image.

Sadly, not every additional package is available on UBI.
Depending on which command you want to install (for example ncdu), Ubuntu could be the better choice.


-from=ubuntu


selects the latest Ubuntu LTS version as the base image for your Domino container image.

Nomad Web 1.0.16 Spell Check and LS2CAPI support

Daniel Nashed – 3 June 2025 16:55:41

Nomad Web 1.0.16 just shipped. It comes with two great new features.

Full spell check support and LS2CAPI.

The look & feel has also changed in some details.


I have already updated the Domino Container project and installed it in the DNUG Lab by updating the container image...



Image:Nomad Web 1.0.16 Spell Check and LS2CAPI support

Red Hat Enterprise Linux 10 & UBI 10 available

Daniel Nashed – 1 June 2025 19:15:18

Earlier than announced, Red Hat Enterprise Linux 10 and the Universal Base Image (UBI) 10 are available.

There are no surprises because it is based on CentOS Stream 10, which I have looked into before.


RHEL 10 came out too late to be officially supported with the upcoming Domino 14.5 release on June 17.


The Linux kernel has been upgraded from 5.14 to 6.12.
glibc has been updated from 2.34 to 2.39.


That's not too big a jump. Ubuntu 24.04 is running the same glibc 2.39 and a newer Linux kernel, 6.8.


I don't think many customers want to move immediately. But I have already added the UBI 10 image to the container project.

It's not yet the default, but you can select it via:


-from=ubi10-minimal

-from=ubi10


This will select the new UBI 10 base image:


registry.access.redhat.com/ubi10/ubi-minimal

registry.access.redhat.com/ubi10


For containers, the kernel version is the kernel version of the host.

Only the new glibc would be involved here, and the kernel would stay the same.
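
A quick way to verify this is to compare the kernel and glibc inside and outside the container. This is just a sketch assuming podman is available (docker works the same way) and uses the UBI 10 minimal image listed above:

# The kernel reported inside the container is the host kernel
uname -r
podman run --rm registry.access.redhat.com/ubi10/ubi-minimal uname -r

# glibc comes from the UBI 10 image itself
podman run --rm registry.access.redhat.com/ubi10/ubi-minimal rpm -q glibc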


I am going to switch the base image on all of my deployments to UBI 10 for a production level test.





Optimizing existing code - a bash example

Daniel Nashed – 1 June 2025 07:11:34

Once code is implemented and works, it is usually not looked at again.
Often performance issues come up later when more data is added to an application.

This isn't just true for normal applications, but also for bash scripts as the following example shows.


The script already had verbose logging, so I could quickly figure out which part of the script took longer.
But it wasn't immediately clear what the issue was.


Pasting code into ChatGPT and asking the right questions gave me a good indication.
The code was parsing strings in a loop. For a single invocation on a full file, cut would be a good way to parse.


But it turned out that invoking cut for every operation caused quite some overhead and high CPU spikes.
ChatGPT had an interesting suggestion, which did not work initially, but gave me a good direction.


I replaced the invocation of cut with internal bash parsing. This not only reduces the CPU overhead but is also dramatically faster.
Analyzing and re-factoring code can be very beneficial. But there needs to be potential in the optimization.
In my case it was simple. The server CPU spiked for about 30 seconds on that Linux machine just for a bash script rebuilding some HTML code.


I wasn't aware of this internal shell way to split strings into an array.

So asking ChatGPT and validating the ideas coming back can be very helpful.


But on the other hand, all of this only makes sense if there is optimization potential.

In my case it was easy to spot and address, and I have other areas in another script that might benefit from the same type of optimization.



Existing code invoking the external command "cut"


while read LINE; do

 ENTRY=$(echo "$LINE" | cut -d'|' -f1)
 CATEGORY=$(echo "$ENTRY" | cut -d'/' -f1)
 SUB=$(echo "$ENTRY" | cut -d'/' -f2)
 COMBINED=${CATEGORY}_${SUB}
 FILE=$(echo "$LINE" | cut -d'|' -f2)
 DESCRIPTION=$(echo "$LINE" | cut -d'|' -f3)
 HASH=$(echo "$LINE" | cut -d'|' -f4)

 html_entry "$COMBINED.html" "$FILE" "$FILE" "$DESCRIPTION" "$HASH" "$SERVER_URL"

done < "$CATALOG_FILE"



New code leveraging bash built-in parsing


while read LINE; do

 # Split the line into its '|' separated fields using the read built-in (no external process)
 IFS='|' read -r -a PARTS <<< "$LINE"

 ENTRY=${PARTS[0]}
 FILE=${PARTS[1]}
 DESCRIPTION=${PARTS[2]}
 HASH=${PARTS[3]}

 # Split the first field again at '/' into category and sub category
 IFS='/' read -r -a PARTS <<< "$ENTRY"

 CATEGORY=${PARTS[0]}
 SUB=${PARTS[1]}
 COMBINED=${CATEGORY}_${SUB}

 html_entry "$COMBINED.html" "$FILE" "$FILE" "$DESCRIPTION" "$HASH" "$SERVER_URL"

done < "$CATALOG_FILE"


New Domino on Linux diagnostic script

Daniel Nashed – 27 May 2025 23:04:00

This is still work in progress, but I have been working on it since Engage.
I had a server hang in the middle of the night, which was hard to troubleshoot remotely from my notebook.

It would not have been easier on a Windows machine. But now it is going to be easier on Linux than on Windows.
I am adding a diagnostic menu to the Domino start script. It's going to be a separate script called from the start script.

The idea is to collect data and have an alternate way to transfer data -- even if the Domino server is down.
The first transfer option is SMTP mail via nshmailx.

But if the files are getting bigger, we might need another option, for example SCP.
Using SCP would not require nshmailx, but would not be as convenient.

Maybe an upload agent would be a good idea, but that would require yet another component.
There is probably no one size that fits all.
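
As a simple illustration of the SCP option, a collected diagnostic archive could be pushed like this (host, user, and file name are just placeholders):

# Hypothetical example: pushing a collected diagnostic archive to a jump host via scp
scp /tmp/domino_diag_$(hostname)_$(date +%Y%m%d).tar.gz admin@jumphost.example.com:/incoming/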

What do you think?


Image:New Domino on Linux diagnostic script

Engage session follow-up – Domino 14.5 AutoUpdate downloads

Daniel Nashed – 24 May 2025 17:40:45
Thanks to everyone who attended my 8:00 AM session on Wednesday. One topic raised during the session deserves a closer look: how Domino AutoUpdate retrieves installation artifacts.

To download product.jwt, software.jwt, and the Notes/Domino web kits, you need at least one server with outbound connectivity to My HCL Software portal (MHS) and the HCL Domino fixlist servers.

Domino AutoUpdate supports HTTP proxy configurations, including authenticated proxies, which should work in most enterprise network environments.
All downloads are validated against the software.jwt, which includes signed metadata for all supported software packages. This model fits most connected environments.

Completely air-gapped setups are uncommon, and to date, there haven’t been strong or clearly defined requirements for full offline AutoUpdate workflows.

However, it’s still possible to override download URLs in AutoUpdate documents to manually provide software.jwt and web kits from internal sources.

To support these scenarios, I created an NGINX-based download proxy that utilizes the documented MHS API.


Initially developed for the Domino Download Script, the proxy has evolved into a flexible tool that can be deployed in several modes:

  • Internal software distribution portal
  • Backend data source for the Domino Download Script
  • Transparent proxy simulating the MHS API for Domino AutoUpdate

This makes it well-suited for secure environments requiring additional controls like antivirus scanning or staging downloads.


My personal use case is all of the above: caching downloaded web kits and hosting them from a local server.



I really appreciate your feedback and would like to hear about your specific requirements.
Please open a GitHub issue here for specific feedback.

GitHub Issues – Domino Start Script Project

https://github.com/nashcom/domino-startscript/issues

For very specific requirements, which cannot be discussed in public, ping me offline.



Related links:

Blog post introducing the download server:

https://blog.nashcom.de/nashcomblog.nsf/dx/new-project-domino-download-server.htm
Domino Download Server project on GitHub:

https://github.com/nashcom/domino-startscript/tree/main/domdownload-server



Nash!Com GitHub organization profile for open source projects

Daniel Nashed – 17 May 2025 18:31:52

There are a couple of open source projects I am involved with.

I have added a page to the organization as a quick overview of my services and my open source work.

This also includes an overview of the HCL open source projects I am involved with.


https://github.com/nashcom/

Many of those projects are Linux and container focused.

I added this overview in preparation for Engage next week.


If you are interested in Domino on Linux you should join us for the Linux round table next week.


-- Daniel




CentOS 9 Stream update broke my SSH server with custom port because of SELinux

Daniel Nashed – 17 May 2025 16:32:58

I just patched my CentOS 9 Stream server to the latest version.
The server came up, but SSH did not work any more.

It turned out that SELinux enforcing mode in combination with the policies for sshd was responsible for it.
My server runs on a custom SSH port.
I had to add that port to my SELinux configuration. Let's assume you want to add port 123.

You would need to allow the port like this:

semanage port -a -p tcp -t ssh_port_t 123

But first you need to check whether SELinux is actually running in enforcing mode with this command:

getenforce
Enforcing


You should check the SELinux settings for the SSH port before and after the change via:

semanage port -l | grep ssh
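
After the change, the custom port should show up next to the default port 22. The output should look roughly like this (with port 123 from the example above), and sshd may need a restart to bind to the custom port again:

# Expected output (approximate) after adding the port
# ssh_port_t                     tcp      123, 22

# Restart sshd so it binds to the custom port again
systemctl restart sshd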

I have not seen this with updates on other distributions like Ubuntu.
But the latest CentOS patches caused this on one of my servers.

Maybe this helps in one case or another.

I am migrating most of my servers to Ubuntu. But I am keeping some for testing.

-- Daniel



Tool chain security dependencies in containers

Daniel Nashed – 17 May 2025 15:02:09

Building your own software from scratch with a small number of dependencies like OpenSSL and libcurl on current Linux versions is straightforward.
But as soon as you add external projects to your stack, more dependencies come into play, which can raise security challenges.


In the container world there is strict vulnerability scanning.  Stacks like https://www.chainguard.dev/ provide great options to keep the stack you are building on secure.
But you might have external projects you rely on. You usually don't want to build everything from scratch.


Example of a dependency


The Prometheus Node Exporter is an optional component of the Domino container image.
It turns out it is built with Go, which can introduce vulnerabilities when the Go run-time is not up to date.

Even if the project manages all its dependencies, an older version of the application might have older versions of, for example, Go statically linked.
Linking Go statically is a common practice to avoid installing the run-time environment on the target environment.
In my particular case the Node Exporter was outdated, and a newer version comes with a newer Go run-time statically linked.


Container scan tools


The good news is that Docker Scout and other vulnerability scanners show the CVEs and the version in which they are fixed.
glibc is dynamically linked and patched as part of the run-time environment. For a Linux machine this would be a normal update.

For a container image it would mean re-building the image with the latest Linux updates.
As a good practice, each piece of software should show the version of the tool chain it was developed with and is running on.
In this example you see the updated run-time for the current Node Exporter, which fixes the reported CVEs.
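
For example, a locally built image can be checked like this. The image tag is just a placeholder for whatever your build produces, and trivy is shown as an alternative scanner:

# List CVEs found in a local container image (hypothetical image tag)
docker scout cves hclcom/domino:latest

# Alternative scanner
trivy image hclcom/domino:latest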


Conclusion
     
  • As a developer you have to be aware of your dependencies and closely watch them
  • If it is reasonable to link dynamically, it can make a lot of sense
  • But if you expect the target to have older versions, it might be better to include them
    (for example, Domino bundles the latest versions of OpenSSL, which are usually newer than what Linux ships)

  • When running containers you should scan the images and ensure you are running the latest versions
  • Making it easy for an admin to query all the dependencies is important, as you see from the Node Exporter example


I have just updated the container image to use the latest Node Exporter.

Example: Node Exporter version output


node_exporter --version
node_exporter, version 1.9.1 (branch: HEAD, revision: f2ec547b49af53815038a50265aa2adcd1275959)
 build user:       root@7023beaa563a
 build date:       20250401-15:19:01
 go version:       go1.23.7
 platform:         linux/amd64




Image:Tool chain security dependencies in containers

